
    Adversarially Tuned Scene Generation

    Computer vision systems trained on computer graphics (CG) generated data still generalize poorly because of the 'domain shift' between virtual and real data. Although simulated data augmented with a few real-world samples has been shown to mitigate domain shift and improve the transferability of trained models, it is desirable to guide or bootstrap the virtual data generation with distributions learned from the target real-world domain, especially where annotating even a few real images is laborious (e.g., semantic labeling and intrinsic images). To address this problem in an unsupervised manner, our work combines recent advances in CG (which aims to generate stochastic scene layouts coupled with large collections of 3D object models) and generative adversarial training (which aims to train generative models by measuring the discrepancy between generated and real data in terms of their separability in the space of a deep, discriminatively trained classifier). Our method iteratively estimates the posterior density over the prior distributions of a generative graphical model within a rejection sampling framework. Initially, we assume uniform distributions as priors on the parameters of a scene described by the generative graphical model. As iterations proceed, the prior distributions are updated toward the (unknown) distributions of the target data. We demonstrate the utility of adversarially tuned scene generation on two real-world benchmark datasets (CityScapes and CamVid) for traffic scene semantic labeling with a deep convolutional net (DeepLab). We observed performance improvements of 2.28 and 3.14 points (IoU metric) between DeepLab models trained on simulated sets prepared from the scene generation models before and after tuning to CityScapes and CamVid, respectively.
    Comment: 9 pages, accepted at CVPR 201
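    The tuning loop described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the renderer, discriminator, parameter names, ranges, and prior-update rule (render_scene, discriminator_score, camera_height, n_cars) are all placeholder assumptions.
```python
# Illustrative sketch of adversarially tuned scene generation (not the authors' code):
# draw scene parameters from priors, score renderings with a discriminator, and use
# rejection sampling to pull the priors toward parameter regions that look "real".
import numpy as np

rng = np.random.default_rng(0)

def render_scene(params):
    # Placeholder for a CG renderer: returns a fake "image" derived from the parameters.
    return rng.normal(loc=params["n_cars"], scale=1.0, size=(8, 8))

def discriminator_score(images):
    # Placeholder for a deep discriminator trained to separate rendered from real images;
    # here it simply returns random "probability of being real" scores.
    return rng.uniform(0.0, 1.0, size=len(images))

# Start from uniform priors over (made-up) scene parameters, as in the paper's initialization.
priors = {"camera_height": (1.0, 3.0), "n_cars": (0.0, 30.0)}

for iteration in range(5):
    n = 256
    draws = {k: rng.uniform(lo, hi, size=n) for k, (lo, hi) in priors.items()}
    images = [render_scene({k: v[i] for k, v in draws.items()}) for i in range(n)]
    p_real = discriminator_score(images)

    # Rejection step: accept each draw with probability equal to its discriminator score.
    accept = rng.uniform(size=n) < p_real
    for k in priors:
        kept = draws[k][accept]
        if kept.size > 8:
            # Tighten the prior toward the accepted region (an illustrative update rule).
            priors[k] = (float(kept.min()), float(kept.max()))

print(priors)
```
    In a real pipeline the discriminator would be retrained each iteration on the newly rendered images versus the real target set, and the accepted parameter draws would be used to refit the priors of the generative graphical model.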

    Determining Object Orientations from Run Length Encodings

    Run-length codes are widely used for image compression, and efficient algorithms have been devised for identifying objects and calculating geometric features directly from these codes. However, if the image objects are rotated, it can be difficult to determine their orientation and position so that they can be grasped by manipulators. This paper describes a method for structural determination of object orientation directly from the run-length codes of successive image scan lines. An algorithm is described that uses the equations of object boundary segments to form hypotheses about object orientations, which are refined as scanning progresses. Two-dimensional polygonal objects are discussed, and it is assumed that objects do not touch or overlap, although the algorithm could be extended to handle those situations.
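    A rough illustration of the underlying idea, not the paper's algorithm: the run start and end columns on successive scan lines sample the object's boundary segments, so fitting a line to how an endpoint drifts from row to row gives a segment slope and hence an orientation estimate. The function names and the simple least-squares fit below are assumptions for illustration.
```python
# Illustrative sketch (not the paper's algorithm): estimate an orientation angle
# from run-length codes by tracking how the left run endpoint drifts across rows.
import numpy as np

def run_length_encode_row(row):
    """Return (start, end) column pairs of foreground runs in one binary row."""
    padded = np.concatenate(([0], row.astype(np.int8), [0]))
    diff = np.diff(padded)
    starts = np.flatnonzero(diff == 1)
    ends = np.flatnonzero(diff == -1) - 1
    return list(zip(starts, ends))

def orientation_from_runs(image):
    """Fit a line to the left endpoint of the first run on each scan line and
    return its angle (degrees) from the vertical, a crude orientation estimate."""
    rows, lefts = [], []
    for r, row in enumerate(image):
        runs = run_length_encode_row(row)
        if runs:
            rows.append(r)
            lefts.append(runs[0][0])
    if len(rows) < 2:
        return None
    slope, _ = np.polyfit(rows, lefts, deg=1)  # columns of drift per scan line
    return float(np.degrees(np.arctan(slope)))

# A small sheared rectangle as a test: its left boundary shifts one column per row.
img = np.zeros((6, 12), dtype=np.uint8)
for r in range(6):
    img[r, 2 + r : 2 + r + 4] = 1
print(orientation_from_runs(img))  # ~45 degrees from vertical for this example
```
    The paper's method is structural rather than a single global fit: it maintains hypotheses about which boundary segment each endpoint belongs to and refines them as scanning progresses, which handles multiple boundary segments of a polygon rather than one averaged slope.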

    Effect of Several Auxiliary Ligands on the Extraction of Manganese(II) With 4-Benzoyl-3-methyl-1-phenyl-5-pyrazolone

    The effect of 2-methylpyridine N-oxide, 4-methylpyridine N-oxide, pyridine N-oxide, 8-aminoquinoline, and dibenzyl sulfoxide on the extraction of manganese(II) by 4-benzoyl-3-methyl-1-phenyl-5-pyrazolone (BMPP) in benzene from an aqueous buffered solution was studied. Synergistic enhancement was observed in all systems. Equilibrium extraction constants and adduct formation constants were calculated. The results showed that the synergistic extraction is due to the formation of adducts such as Mn(BMPP)2B, where B represents the auxiliary ligand.
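    For context, the equilibrium constants mentioned above are usually defined through the standard chelate-adduct extraction model; the general form below is illustrative (HA denotes BMPP, B the auxiliary ligand), and the paper's exact formulation and notation may differ.
```latex
% Illustrative general chelate-adduct extraction model (not necessarily the paper's notation):
% HA = BMPP, B = auxiliary ligand, subscript "org" = organic (benzene) phase.
\[
  \mathrm{Mn^{2+}} + 2\,\mathrm{HA_{org}} + \mathrm{B_{org}}
  \;\rightleftharpoons\; \mathrm{MnA_{2}B_{org}} + 2\,\mathrm{H^{+}}
\]
\[
  K_{\mathrm{ex}} =
  \frac{[\mathrm{MnA_{2}B}]_{\mathrm{org}}\,[\mathrm{H^{+}}]^{2}}
       {[\mathrm{Mn^{2+}}]\,[\mathrm{HA}]_{\mathrm{org}}^{2}\,[\mathrm{B}]_{\mathrm{org}}}
\]
```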